- 
            The deployment of deep learning-based malware detection systems has transformed cybersecurity, offering sophisticated pattern-recognition capabilities that surpass traditional signature-based approaches. However, these systems introduce new vulnerabilities requiring systematic investigation. This chapter examines adversarial attacks against graph neural network-based malware detection systems, focusing on semantics-preserving methodologies that evade detection while maintaining program functionality. We introduce a reinforcement learning (RL) framework that formulates the attack as a sequential decision-making problem, optimizing the insertion of no-operation (NOP) instructions to manipulate graph structure without altering program behavior. Comparative analysis includes three baseline methods: random insertion, hill climbing, and gradient-approximation attacks. Our experimental evaluation on real-world malware datasets reveals significant differences in effectiveness, with the reinforcement learning approach achieving perfect evasion rates against both Graph Convolutional Network and Deep Graph Convolutional Neural Network architectures while requiring minimal program modifications. Our findings reveal three critical research gaps: transitioning from abstract Control Flow Graph representations to executable binary manipulation, developing universal vulnerability discovery across different architectures, and systematically translating adversarial insights into defensive enhancements. This work contributes to understanding adversarial vulnerabilities in graph-based security systems while establishing frameworks for evaluating the robustness of machine learning-based malware detection.
            Free, publicly-accessible full text available December 1, 2026
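The hill-climbing baseline described above can be sketched in a few lines. This is a minimal toy, not the chapter's implementation: `detector_score` is an illustrative density heuristic standing in for a trained GNN, and the graph is reduced to node/edge counts. The one faithful invariant is that each NOP basic block spliced onto a control-flow edge adds exactly one node and one edge while leaving program semantics unchanged.

```python
# Toy stand-in for a GNN malware detector scoring a control-flow graph by
# edge density -- purely illustrative, NOT one of the chapter's models.
def detector_score(num_nodes, num_edges):
    if num_nodes < 2:
        return 0.0
    return num_edges / (num_nodes * (num_nodes - 1))

def insert_nop_block(num_nodes, num_edges):
    # A NOP basic block spliced onto one edge adds one node and one edge,
    # leaving the program's observable behavior unchanged.
    return num_nodes + 1, num_edges + 1

def hill_climb_evade(num_nodes, num_edges, threshold, max_steps=100):
    """Greedy baseline: insert NOP blocks until the score dips below threshold."""
    steps = 0
    while detector_score(num_nodes, num_edges) >= threshold and steps < max_steps:
        num_nodes, num_edges = insert_nop_block(num_nodes, num_edges)
        steps += 1
    return num_nodes, num_edges, steps
```

The RL formulation replaces the fixed greedy rule with a learned policy over the same action space (which edge to split), which is why it can reach evasion with fewer insertions than this baseline.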
- 
            Free, publicly-accessible full text available June 23, 2026
- 
            Free, publicly-accessible full text available May 1, 2026
- 
            This paper presents a novel approach for classifying electrocardiogram (ECG) signals in healthcare applications using federated learning and stacked convolutional neural networks (CNNs). Our innovative technique leverages the distributed nature of federated learning to collaboratively train a high-performance model while preserving data privacy on local devices. We propose a stacked CNN architecture tailored for ECG data, effectively extracting discriminative features across different temporal scales. The evaluation confirms the strength of our approach, culminating in a final model accuracy of 98.6% after 100 communication rounds, significantly exceeding baseline performance. This promising result paves the way for accurate and privacy-preserving ECG classification in diverse healthcare settings, potentially leading to improved diagnosis and patient monitoring.
            Free, publicly-accessible full text available December 17, 2025
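The server side of one communication round can be sketched with the standard FedAvg merge rule, which the abstract's setup implies: clients train locally and only weight updates (never raw ECG recordings) reach the server. Everything here is a toy under assumed conventions — weights are flat lists, and the two clients and their dataset sizes are hypothetical.

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg server step: average flat weight vectors, weighted by the
    number of local examples each client trained on."""
    total = float(sum(client_sizes))
    dim = len(client_weights[0])
    merged = [0.0] * dim
    for weights, n_examples in zip(client_weights, client_sizes):
        for i in range(dim):
            merged[i] += weights[i] * (n_examples / total)
    return merged

# One communication round: two hypothetical clients return locally
# updated weights; the server merges them without seeing any raw ECGs.
round_result = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[25, 75])
```

Weighting by local dataset size keeps the merged model from being skewed toward clients with few examples; the paper's 100 rounds simply repeat this broadcast–train–merge loop.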
- 
            Free, publicly-accessible full text available November 1, 2025
- 
            While network attacks play a critical role in many advanced persistent threat (APT) campaigns, an arms race exists between network defenders and the adversary: to keep APT campaigns stealthy, the adversary is strongly motivated to evade the detection system. However, new studies have shown that neural networks are likely a game-changer in this arms race: they could be applied to achieve accurate, signature-free, and low-false-alarm-rate detection. In this work, we investigate whether the adversary could fight back during the next phase of the arms race. In particular, noticing that none of the existing adversarial example generation methods could generate malicious packets (and sessions) that simultaneously compromise the target machine and evade the neural network detection model, we propose a novel attack method to achieve this goal. We have designed and implemented the new attack. We have also used Address Resolution Protocol (ARP) Poisoning and Domain Name System (DNS) Cache Poisoning as case studies to demonstrate the effectiveness of the proposed attack.
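The core constraint the abstract identifies — perturb a packet enough to evade a detector without breaking its malicious function — can be illustrated with a toy byte-level sketch. The field offsets, the "detector", and the padding-only mutation rule below are all assumptions for illustration, not the paper's method or a real ARP parser; the point is only that functional bytes are frozen while semantics-free bytes are searched.

```python
import random

# Hypothetical toy layout: the first 28 bytes stand in for the functional
# ARP payload that must survive intact for the attack to still work; the
# rest stands in for Ethernet minimum-frame padding, which carries no
# semantics. Both offsets are illustrative assumptions, not a real parser.
FUNCTIONAL = slice(0, 28)
PADDING = slice(28, 60)

def toy_detector(packet):
    # Stand-in for a neural detector: flags packets whose padding is all
    # zeros (a crude "machine-crafted" signature).
    return all(b == 0 for b in packet[PADDING])

def evade(packet, seed=0):
    """Randomize only the non-functional padding; the functional region is
    untouched, so the packet remains operationally equivalent."""
    rng = random.Random(seed)
    mutated = bytearray(packet)
    for i in range(*PADDING.indices(len(mutated))):
        mutated[i] = rng.randrange(256)
    return bytes(mutated)
```

Real adversarial traffic generation must honor far richer constraints (checksums, protocol state machines, session-level features), which is precisely why off-the-shelf adversarial-example generators fail here.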